.de 1s
.nr PS 11
.nr VS 13
.br
.LP
..
.de 2s
.nr PS 11
.nr VS 16
.br
.LP
..
.de cB
.nr PS 9
.nr VS 11
.br
.LP
.KS
.LD
.ft CW
..
.de cE
.DE
.nr PS 11
.nr VS 16
.br
.KE
.ft R
..
.2s
.ce
\s+4The Purdue Compiler Construction Tool Set\s-4
.ce
\s+2Version 1.06 Update\s-2
.ce
\fIDecember 1, 1992\fP
.NH
Enhancements
.PP
This section describes the update of PCCTS Version 1.00 to Version
1.06 (1.01-1.05 were internal releases). The list of changes here is
not complete since many \*Qlittle\*U fixes were made (actually, the
predicated parsing feature is major). Here, we concentrate on the
enhancements to the system; file \f(CWBUGS100\fP contains a list of
the bug fixes incorporated in the 1.06 release. In addition, note
that the manual has not been updated and will not be until the next
major release (or until we get around to it).
.NH 2
Predicated Parsing \(em \fBALPHA Version\fP
.PP
Normally, parsing is a function of syntax alone. Semantics
(meaning/value of the input) are not taken into account when parsing
decisions are made\(emthis is context-free parsing. Typically,
languages such as C++ have some rules which require the lexical
analyzer to consult the symbol table to determine what token (e.g.
classname versus enumerated name versus typedef name) is given to the
parser. However,
lexical analyzers have no context information except the current token
of lookahead and must be coerced via flags to yield the various token
types. Ad hoc approaches become increasingly difficult to implement
as the number of ambiguities (due to lack of semantic information) in
a grammar rises.
.PP
Context dictates the scope of variables in programming languages.
Because only the parser has context information, it is reasonable to
assume that the parser is the correct place to maintain and access the
symbol table; there are other situations in which it is useful to
have context direct parsing. Unfortunately, most parser generators do
not allow grammar actions to influence the parse; i.e. parsing is a
function of syntax alone. Because PCCTS generates recursive-descent
LL(k) parsers in C, allowing user-defined C actions to influence
parsing is a straightforward and natural addition to the PCCTS
description meta-language. Although bottom-up parsing strategies can
allow actions to direct parsing, bottom-up parsers have much less
semantic information; i.e. LR and LALR techniques allow only
\fIS\fP-attributed grammars whereas LL allows \fIL\fP-attributed
grammars (for example, LL always knows which set of rules it is
currently trying to match whereas LALR does not, in general; see your
favorite book on parsing theory).
.PP
In this section, we introduce a rudimentary version of predicates,
with a full implementation to appear in future releases. To support
context-sensitive parsing, PCCTS 1.06 allows a new action type called
\fIpredicate\fP (the \f(CW-pr\fP option must be used), which is denoted:
.cB
<<predicate>>?
.cE
.LP
or, optionally,
.cB
<<predicate>>?[fail_action]
.cE
.LP
where the second notation \*Qpasses\*U an action argument to the
predicate operator \*Q\f(CW?\fP\*U; the argument is executed upon
predicate failure. A predicate is a C action that evaluates to either true
(success) or false (failure) and can be used for both \fIsemantic
validation\fP and for \fIdisambiguating\fP syntactic conflicts in the
underlying grammar.
.PP
Validation predicates are used as simple tests and are functionally
equivalent to the following normal action:
.cB
if ( !predicate ) {\fIerror message; exit rule\fP}
.cE
.LP
or, when a fail action is present:
.cB
if ( !predicate ) {fail_action}
.cE
.LP
For example,
.cB
a : A <<pred>>? B
;
.cE
.LP
matches \f(CWA\fP, evaluates \f(CWpred\fP, and matches \f(CWB\fP if
\f(CWpred\fP is true; otherwise an error is reported (i.e. \*Q\f(CWfailed
predicate 'pred'\fP\*U) and rule \f(CWa\fP is exited before matching
\f(CWB\fP. To generate a more useful error message, the user may
specify a fail action:
.cB
a : A <<pred>>?[fail] B
;
.cE
.LP
which executes \f(CWfail\fP when \f(CWpred\fP evaluates to false.
.PP
A predicate which validates a production can, at the same time, be
used to disambiguate a syntactically ambiguous grammar structure. To
be a candidate, a predicate must appear on the extreme left edge of a
production. Disambiguating predicates are used in the alternative
prediction expressions to enable or disable certain alternative
productions of a block (rule or subrule). Parsing in this new
environment can be viewed in this way:
.IP "\fIstep 1: Disable invalid productions\fR"
An \fIinvalid\fP production is a production whose disambiguating
predicate evaluates to false.
.IP "\fIstep 2: Parse as normal block\fP"
Once a list of \fIvalid\fP productions has been found, they are parsed
as a normal rule or subrule.
.LP
Essentially, disambiguating predicates \*Qgate\*U alternative
productions in and out depending on their \*Qapplicability\*U.
Productions without predicates have an implied predicate of
\f(CW<<TRUE>>?\fP; i.e. they are always valid.
.PP
A single predicate can be used as both a validation and a
disambiguating predicate\(emANTLR determines which way to use them
automatically. The role taken by a predicate depends on its location
in the grammar. Predicates are all considered validation predicates
as they must always evaluate to true for parsing of the enclosing
production to continue. However, when a grammatical ambiguity is
discovered, ANTLR searches for \fIvisible\fP predicates to
disambiguate the construct. A visible predicate is one which can be
evaluated in a production prediction expression without consuming a
token or executing an action (except initialization actions); e.g. all
disambiguating predicates appear on the left edge of some production.
Fail actions are not used in prediction expressions because a failed
predicate disables a production; it does not report a syntax error
unless no other \fIviable\fP productions are present; a viable
production is one which is predicted by the lookahead. The action of
moving a predicate into a prediction expression is called
\fIhoisting\fP; currently, at most one predicate can be hoisted per
ambiguous production. In general, predicates may only be a function
of:
.IP "User variables"
This is dangerous as hoisting may evaluate the predicate in a scope
where the variable(s) does not exist; e.g. using rule parameters can
be a problem if a predicate that references them is hoisted outside of
the rule.
.IP "Attributes"
Only attributes in the left context can be used; i.e. attributes of
rules/tokens created before the predicate is evaluated.
.IP "Lookahead"
The next \fIk\fP tokens of lookahead can be tested via \f(CWLA(i)\fP,
\f(CWLATEXT(i)\fP.
.LP
Also, \fBpredicates may not have side-effects\fP (user must undo
anything that was changed if they do; in other words, they must
\*Qbacktrack\*U). See the Future Work section for information
regarding the use of predicates and extremely large lookahead buffers.
.PP
Consider the following simple grammar:
.cB
a : <<pred1>>? A B
| <<pred2>>? A C
;
.cE
.LP
This grammar is LL(2) (\f(CW-k\ 2\fP) and ANTLR would not report an
ambiguity. However, it is LL(1) ambiguous (\f(CW-k\ 1\fP) and
\f(CWantlr\ t.g\fP would generate a warning message:
.cB
t.g, line 1: warning: alts 1 and 2 of rule ambiguous upon { A }
.cE
.LP
Now, if we allow predicates to be used with \f(CWantlr\ -pr\ t.g\fP,
then ANTLR finds that both predicates are visible and can be used to
disambiguate the production; it is up to the user to ensure that the
predicates do, in fact, disambiguate the production. ANTLR would
generate code similar to the following:
.cB
a()
{
	...
	if ( pred1 && LA(1)==A ) {
	}
	else if ( pred2 && LA(1)==A ) {
	}
	else \fIerror\fP;
	...
}
.cE
.LP
The following grammars are syntactically and semantically equivalent
to the above grammar, but the predicates are not hoisted trivially:
.cB
a : ( <<pred1>>? A B )
| <<pred2>>? A C
;
.cE
.LP
ANTLR looks inside the subrule of production one and finds that
\f(CWpred1\fP can be evaluated before executing an action and before
matching a token; it is hoisted for use in the prediction expression
of rule \f(CWa\fP. Predicates can be hoisted from other rules as well:
.cB
a : b
| c
;
b : <<pred1>>? A B
;
c : <<pred2>>? A C
;
.cE
.LP
Rule \f(CWa\fP would still have the same production prediction
expression. The following grammar does not have predicates that can be
hoisted to disambiguate the prediction:
.cB
a : A <<pred1>>? B /* cannot hoist past token */
| <<action>> <<pred2>>? A C /* cannot hoist past action */
;
.cE
.PP
Now, consider how a simplified version of K&R C variable and function
declarations could be specified:
.cB
decl: /* variable definition, function DECLARATION/DEFINITION */
type declarator ( func_body | ";" )
| /* function DEFINITION */
declarator func_body
;
type: "int"
| "float"
| WORD
;
declarator
: WORD { "\e(" args "\e)" }
;
func_body
: ( decl )* "\e{" (decl)* (stat)* "\e}"
;
.cE
.LP
Unfortunately, rule \f(CWdecl\fP is totally ambiguous in the LL(1)
sense (and in the LL(k) sense) since \f(CWWORD\fP can begin both
alternatives of rule \f(CWdecl\fP. Increasing the lookahead to two
tokens does not help as the alternatives are ambiguous upon \f(CWWORD\
WORD\fP. Alternative one could match \f(CWmy_type\ var\fP and
alternative two could match \f(CWfunc\ my_type\fP; hence, the second
alternative matches at least one invalid sentence. This is because
the \f(CW(\ decl\ )*\fP subrule of rule \f(CWfunc_body\fP, which would
match the second \f(CWWORD\fP of \f(CWWORD\ WORD\fP, does not know if
an argument list was seen previously or not. The question of whether
to match a parameter definition list after a declarator is
context-sensitive because a function header may or may not have been
seen immediately beforehand.
.PP
One way in which this subset could be recognized is to recognize a
large superset of the language and then, using actions, verify that the
input was valid:
.cB
decl: {type} declarator ( func_body | ";" )
;
.cE
.LP
But, jamming everything together makes action introduction more
complicated. It is better to specify the input language exactly with
predicates.
.PP
To overcome the ambiguity in rule \f(CWdecl\fP and to ensure that only
a function declarator is followed by a parameter definition, we
augment the grammar as follows:
.cB
decl: <<int is_func;>>
/* variable definition, function DECLARATION/DEFINITION */
type declarator > [is_func]
( <<is_func>>?[printf("warning: declarator not func\en");]
func_body
| ";"
)
| /* function DEFINITION */
declarator > [is_func]
<<is_func>>?[printf("warning: expecting func header not %s\en", LATEXT(1));]
func_body
;
type: "int"
| "float"
| <<LA(1)==WORD ? isTYPE(LATEXT(1)) : 1>>?
WORD
;
declarator > [int is_func]
: <<LA(1)==WORD ? !isTYPE(LATEXT(1)) : 1>>?
WORD ( "\e(" <<$is_func=1;>> args "\e)" | <<$is_func=0;>> )
;
func_body
: ( decl )* "\e{" (decl)* (stat)* "\e}"
;
.cE
.LP
In rule \f(CWtype\fP, we add a predicate that ensures that, if the
lookahead is a \f(CWWORD\fP, the \f(CWWORD\fP is, in fact, a
user-defined type; nothing can be said if the lookahead is something
else, so we return success in that case; i.e. the predicate does not
apply for that lookahead (context). If rule \f(CWdeclarator\fP is
called, a failure of the predicate will report a semantic error. In
our example, the predicate is also hoisted for use as a disambiguating
predicate in rule \f(CWdecl\fP. The predicate in rule
\f(CWdeclarator\fP ensures that, if the lookahead is a \f(CWWORD\fP,
the \f(CWWORD\fP is not a user-defined type. As before, it is
also hoisted to rule \f(CWdecl\fP to disambiguate the second
production of that rule. Both productions of rule \f(CWdecl\fP have
predicates that are hoisted into the prediction expressions for those
productions; ANTLR assumes that the hoisted predicates
\fBcompletely\fP disambiguate the decision, which is true in our
case. The first predicate physically present in rule \f(CWdecl\fP
ensures that, if a function body is coming up, the declarator was a
function header. The second predicate in rule \f(CWdecl\fP again
ensures that a function header was seen before trying to match a
function body. The variable \f(CWis_func\fP is set to the return
value of rule \f(CWdeclarator\fP, which is set by two simple actions in
\f(CWdeclarator\fP.
.PP
Currently, the context under which predicates are evaluated may not be
correct; i.e. currently ANTLR does not compute the context from which
it hoists a predicate. For example,
.cB
a : b B
| (X Y | A C)
;
b : <<pred>>? A
;
.cE
.LP
The predicate \f(CWpred\fP will be hoisted for use in disambiguating
rule \f(CWa\fP. However, the predicate will be evaluated regardless of
whether \f(CWA\fP, which is its context, or \f(CWX\fP is on the input
stream. It may be invalid to apply the predicate when the lookahead
is \f(CWX\fP. The user may overcome this by changing \f(CWpred\fP
thus:
.cB
a : b B
| (X Y | A C)
;
b : <<LA(1)==A ? pred : TRUE>>? A
;
.cE
.LP
which will ensure that the predicate is only evaluated when \f(CWA\fP
is seen on the input stream.
.PP
Another problem is that hoisting can occur in situations
that break the semantics. For example:
.cB
a : b[3]
| A
;
b[int i]
: <<f($i)>>? A
;
.cE
.LP
The predicate \f(CWf(i)\fP will be hoisted to rule \f(CWa\fP where the
parameter \f(CWi\fP is not even defined. Either do not do this or put
a predicate (dummy or otherwise) between the decision and the
predicate you do not wish to be hoisted:
.cB
a : <<1>>? b[3]
| A
;
b[int i]
: <<f($i)>>? A
;
.cE
.PP
This is an alpha release feature in that it is not anywhere near what
we want it to do and has NOT been thoroughly tested. User beware. We
do not guarantee that the syntax of predicates will stay this way;
however, the semantics will not change. We want users to play with
predicates to find new, better or different ways of doing things. We
\fBreally\fP want user feedback on these critters before a full
implementation is developed. For example, how do you want the parser
to respond if no valid, viable production is found?
.cB
a : <<p1>>? A
| <<p2>>? A
| B
;
.cE
.LP
If the input were \f(CWC\fP, a correct error message would be reported:
.cB
line 1: syntax error at "C" missing { A B }
.cE
.LP
This is the correct message because it is truly a syntax error, as
opposed to a semantic one. On the other hand, upon input \f(CWA\fP, when
predicates \f(CWp1\fP and \f(CWp2\fP are false, the parser reports the
following bizarre message:
.cB
line 1: syntax error at "A"
.cE
.LP
which is not very meaningful. The fail function, \f(CWzzFAIL\fP,
found that, in fact, \f(CWA\fP is syntactically valid and got confused
because an error was raised; this is because the fail routine has no
semantic information. A suitable definition of error reporting in the
predicate environment must be created.
.PP
Please send any suggestions/questions to \f(CWparrt@ecn.purdue.edu\fP,
\f(CWhankd@ecn.purdue.edu\fP or use the email server mechanism at
\f(CWpccts@ecn.purdue.edu\fP (send a \f(CWSubject:\fP line of
\*Qquestion antlr\*U and fill in the body of the email with your
suggestion/question).
.NH 2
1.06 Is Written in 1.00
.PP
The grammar for the PCCTS meta-language (file format) has been
implemented in Version 1.00, making heavy use of \f(CW#lexclass\fP
directives. File \f(CWlexhelp.c\fP has been eliminated due to the
superiority of 1.00 to 1.00B.
.NH 2
ANTLR Compiles Under ANSI C
.PP
Because of the rewrite of the grammar and some rewrites of ANTLR code,
ANTLR now compiles with ANSI compilers without a whimper (except for
two \*Q\f(CWunknown escape sequence\fP\*U warnings). Your mileage may
vary.
.NH 2
Grammar Files Distributed
.PP
\f(CWantlr.g\fP and \f(CWdlg_p.g\fP are now included in the source
distribution for 1.06. Note that the 1.00 PCCTS (both ANTLR and DLG)
are required to process these grammar files. DO NOT DESTROY YOUR OLD
COPY OF 1.00 PCCTS (at least save the executables).
.NH 2
Script Generates Makefiles for PCCTS Projects
.PP
A C program called \f(CWgenmk.c\fP is available in the \f(CWsupport\fP
directory of the PCCTS release which has the following usage:
.LP
\f(CWgenmk project f1.g f2.g ... fn.g\fP
.LP
It generates a makefile that creates an executable, \f(CWproject\fP,
from a set of grammar files. For example, \f(CWgenmk\ t\ t.g\fP
generates:
.cB
#
# PCCTS makefile for: t.g
#
DLG_FILE = parser.dlg
ERR_FILE = err.c
HDR_FILE = stdpccts.h
TOK_FILE = tokens.h
K = 1
ANTLR_H = .
BIN = .
ANTLR = $(BIN)/antlr
DLG = $(BIN)/dlg
CFLAGS = -I. -I$(ANTLR_H)
AFLAGS = -fe err.c -fh stdpccts.h -fl parser.dlg -ft tokens.h -k $(K) -gk
DFLAGS = -C2 -i
GRM = t.g
SRC = scan.c t.c err.c
OBJ = scan.o t.o err.o
.cE
.cB
t: $(OBJ) $(SRC)
	cc -o t $(CFLAGS) $(OBJ)
t.c parser.dlg : t.g
	$(ANTLR) $(AFLAGS) t.g
scan.c : parser.dlg
	$(DLG) $(DFLAGS) parser.dlg scan.c
.cE
.LP
This program is handy when beginning a new PCCTS project or when first
learning about PCCTS.
.NH 2
DLG Supports Case Insensitive Scanners
.PP
DLG has two new options which provide control over the case
sensitivity of the generated scanner. Specifically, case
insensitivity implies that when a character is referenced in a regular
expression, DLG behaves as if the user had typed both upper and lower
case versions of that character; i.e. \f(CW(\fIa\f(CW|\fIA\f(CW)\fR
where \fIa\fP is some character. The new options are:
.IP \f(CW-ci\fP
Make lexical analyzer case insensitive
.IP \f(CW-cs\fP
Make lexical analyzer case sensitive (default).
.NH 2
Delayed Lookahead Fetches in Generated Parser
.PP
Currently, PCCTS generates parsers which always have \fIk\fP tokens of
lookahead. This is done by following the strategy that another token
is fetched (\f(CWzzCONSUME\fP) when one is matched (\f(CWzzmatch\fP).
This can be a problem for actions that need to occur after a token has
been matched, but before the next token of lookahead is fetched. This
is somewhat overcome in PCCTS 1.00 by delaying the fetch if an action
is found immediately after a token reference. With the new delayed
lookahead scheme, the next token is not consumed until the next match
is required. This means that any action before the next match (not
necessarily adjacent to the previous match) will be executed before a
lookahead fetch occurs. Turn on ANTLR option \f(CW-gk\fP and DLG
option \f(CW-i\fP to enable this feature. This feature appears to
work with AST's and attributes with the constraints mentioned below in
the incompatibilities section (e.g. use of \f(CWLA(\fIi\f(CW)\fR and
\f(CWLATEXT(\fIi\f(CW)\fR has been restricted). It has been tested
with the C (\fIk\fP>1) and Pascal (\fIk\fP==1) examples provided in the
release and with several other large grammars.
.PP
This feature is primarily useful for developers of interactive tools.
Previously, it was \fBreally\fP hard to get PCCTS to generate truly
interactive tools. It appeared as if the parser was always waiting on
a token fetch rather than executing an appropriate action. E.g. in
PCCTS 1.00,
.cB
a : ( "A" "B" "C" "\en" ) <<printf("got it\en");>>
;
.cE
.LP
would not print \f(CWgot\ it\fP until one token after the newline had
been typed. PCCTS 1.06 will generate parsers that print the message
immediately upon newline and will exit without waiting for another
token as there are no token references following the action.
.PP
Another way in which delayed lookahead is useful lies in translators
which add symbols to the symbol table which must be examined by a
lexical action. If a lookahead fetch occurs too fast, the lexical
action may miss the introduction of a symbol into the symbol table.
.PP
This feature is a bit flaky for the moment\(em\f(CWLA(\fIi\f(CW)\fR
and \f(CWLATEXT(\fIi\f(CW)\fR will generally not have the same values
as when the \f(CW-gk\fP option is not used (for \fIk\fP>=2). Use
attributes to access token values; the lookahead buffer is not really
a user object. If you insist upon accessing the lookahead buffer, use
\f(CWLA(0)\fP and \f(CWLATEXT(0)\fP, which typically access the last
token matched and last text matched respectively; this is
distinguished from \f(CWLA(1)\fP, which means the next token of
lookahead. Accessing the next token of lookahead is invalid because
it will not be fetched from the input stream until needed (just before
the next decision or match).
.NH 2
Tutorials Available
.PP
With release 1.06, we are distributing both a beginning and advanced
tutorial. They have not been thoroughly \*Qdebugged\*U but are much
better than nothing:
.IP "Beginning Tutorial"
This tutorial introduces the basic functionality of PCCTS by example.
The user need not be familiar with parsing theory or other compiler tools,
but any familiarity reduces the learning curve substantially.
.IP "Advanced Tutorial"
Constructing a translator can be viewed as an iterative refinement
process moving from language recognition to intermediate-form
transformation. This tutorial presents one possible sequence of
refinements. It uses as many features of PCCTS as is reasonable
without regard to optimality. It develops a compiler for a simple
string manipulation language called \fIsc\fP. The resulting compiler
generates code for a simple stack machine.
.NH 2
Error Messages for \fIk>\fR1
.PP
Previous versions of PCCTS did not handle error messages correctly for
\fIk>\fP1. For example, with two tokens of lookahead and the
following grammar:
.cB
a : "A" "B" "D"
| "A" "C" "E"
;
.cE
.LP
an incorrect input of \f(CWA\ D\ D\fP would yield:
.cB
line 1: syntax error at "A" missing A
.cE
.LP
which is wrong (and incredibly confusing). The new error mechanism
generates the following error message upon the same incorrect input:
.cB
line 1: syntax error at "A D"; "D" not in { B C }
.cE
.LP
which is infinitely superior. Unfortunately, situations may arise
when even this method will give an invalid message. This may occur
when alternatives have lookahead sequences which are permutations of
the same tokens.
.PP
The definition of the standard error reporting function,
\f(CWzzsyn()\fP has been modified. The parameter list is now:
.cB
void
zzsyn(char *text,
      int tok,
      char *egroup,
      unsigned *eset,
      int etok,
      int k,
      char *bad_text);
.cE
.LP
Users can ignore this as it is transparent to them; unless, of course,
the standard error reporting must be modified. In addition,
\f(CWzzFAIL\fP is now a function rather than a macro.
.NH 2
Trace Facility has Exit Macro
.PP
Previously, only an entry trace macro was inserted in parsers when the
\f(CW-gd\fP ANTLR option was used. An exit macro has been defined
which resulted in \f(CWzzTRACE\fP becoming \f(CWzzTRACEIN\fP. Also, a
default trace macro prints out \*Q\f(CWenter\ rule\ \fIrule\fR\*U if the
user defines none. To define your own, the macro
definitions must appear in the \f(CW#header\fP action. As before, the
sole argument to the trace routines is a string representing the rule
which has been entered or is about to be exited.
.NH 2
Resource Limitation
.PP
Occasionally, ANTLR is unable to analyze a grammar submitted by the
user. This rare situation can only occur when the grammar is large
and the amount of lookahead is greater than one. A nonlinear analysis
algorithm is used by PCCTS to handle the general case of \fILL(k)\fP
parsing. The average complexity of analysis, however, is near linear
due to some fancy footwork in the implementation which reduces the
number of calls to the full \fILL(k)\fP algorithm.
.PP
To avoid the situation where ANTLR takes 23 hours of CPU time and then
runs out of virtual memory, use the \f(CW-rl\ \fIn\fR resource limit
option where \fIn\fP is the maximum number of tree nodes to be used by
the analysis algorithm. An error message will be displayed, if this
limit is reached, which indicates which grammar construct was being
analyzed when ANTLR hit a non-linearity. Use this option if ANTLR
seems to go off to lunch and your disk starts swapping; try
\fIn\fP=10000 to start. Once the offending construct has been
identified, try to remove the ambiguity that ANTLR was trying to
overcome with large lookahead analysis. Future versions of PCCTS may
incorporate a known algorithm that does not exhibit this exponential
behavior.
.NH 2
Rule Prefix Option
.PP
An ANTLR option has been added that prefixes all functions
corresponding to rules with a prefix. This can be used to provide
symbol hiding in your project to isolate the parser. It can also be
used to allow rule names that correspond to C keywords such as
\f(CWif\fP and \f(CWtypedef\fP.
.NH 2
Standard PCCTS Header
.PP
Two new ANTLR options have been added that control the creation of a
standard PCCTS header file \(em \f(CWstdpccts.h\fP. Option
\f(CW-gh\fP instructs ANTLR to create a file (\f(CWstdpccts.h\fP,
unless \f(CW-fh\fP is used to rename it) which contains all header
information needed by non-PCCTS generated files that want to access
PCCTS symbols. For example, it indicates the \fIk\fP of the parser, whether
trees are being constructed, whether lookahead is to be delayed, and
indicates what the user specified in the \f(CW#header\fP action in the
grammar file. Previously, the user had to manually construct this
information from the grammar file in order to place the information in
a non-PCCTS C file. The \f(CW-fh\fP option is merely used to rename
\f(CWstdpccts.h\fR.
.NH 2
Doubly-Linked AST's
.PP
A new function is available in \f(CWast.c\fP which will doubly-link
any tree that is passed in. To use this option, the user must define
\f(CWzzAST_DOUBLE\fP in the \f(CW#header\fR directive or on the command-line
of the C compile. This defines \f(CWleft\fP and \f(CWup\fP fields
automatically in the AST node typedef. ANTLR generated parsers,
normally, only construct singly-linked trees. The fields can be
filled in via code similar to the following:
.cB
#header <<
#define zzAST_DOUBLE
...
#define AST_FIELDS \fIuser-fields\fP;
>>
<<
main()
{
	AST *root = NULL;
	ANTLR(start(&root), stdin);
	zzdouble_link(root, NULL, NULL);
}
>>
.cE
.LP
where the function is defined as:
.cB
zzdouble_link(AST *t, AST *left, AST *up);
.cE
.NH 2
C++ Compatible Parsers
.PP
PCCTS parsers may now be compiled with C++ compilers; i.e. the output
is more ANSI C-like than before. It has been successfully compiled
with GCC 2.2, but not with GCC 1.37. We do not guarantee anything.
To be safe, use the \f(CW-ga\fP option so that PCCTS generates
ANSI-style prototypes for functions generated from rules. As a simple
example, consider:
.cB
#header <<
#include "charbuf.h"
/* stdio.h is included, by default, but doesn't seem to bother stream.h */
#include <stream.h>
#include <stdlib.h>
>>
#token "[\e \et\en]" <<zzskip();>>
<<
main()
{
	ANTLR(a(), stdin);
	cout << "end of C++ test\en";
}
>>
a : "A" "B" << cout << $1.text << $2.text << "\en"; >>
;
.cE
.LP
which does not do much:
.cB
% t
A B
AB
end of C++ test
%
.cE
.LP
but it does compile with G++ 2.2 except that a warning is generated
concerning \f(CWstrncpy()\fP not being declared before use. This is
trivial to fix, of course \(em by modifying the \f(CWcharbuf.h\fP
file. We compiled this with:
.cB
antlr -k 2 -gk -ga t.g
Antlr parser generator Version 1.06 1989-1992
dlg -C2 -i parser.dlg scan.c
dlg Version 1.06 1989-1992
g++ -I. -I../1.06/h -g -c scan.c
g++ -I. -I../1.06/h -g -c t.c
t.c: In function `struct Attrib zzconstr_attr (int, char *)':
t.c:19: warning: implicit declaration of function `strncpy'
g++ -I. -I../1.06/h -g -c err.c
g++ -o t -I. -I../1.06/h -g scan.o t.o err.o
.cE
.LP
We anticipate a rewrite to be more C++-oriented sometime in the future.
.NH
Acknowledgements
.PP
We acknowledge Dan Lawrence of MDBS for the new error reporting
facility concerning greater than one token of lookahead; Dana Hoggatt,
also of MDBS, suggested the rule prefix idea (\f(CW-gp\fP option) and
beta tested 1.06. We thank Ed Harfmann of MDBS for creating the
\f(CWmakefile.os2\fP files and porting it to the PC. We acknowledge
the following beta testers for 1.06 (alphabetically): Thomas Buehlman
(buehlman@iwf.mabp.ethz.ch), Peter Dahl (dahl@everest.ee.umn.edu),
Chris Song (dsong@ncsa.uiuc.edu), Ariel Tamches (tamches@cs.UMD.EDU).
We recognize Russell Quong (quong@ecn.purdue.edu) of Purdue EE for his
work with us on defining and refining predicates. Ariel Tamches
(tamches@cs.UMD.EDU) deserves attention for hacking on the particulars
of the alpha-release predicates.
.NH
Machine Compatibility
.PP
PCCTS Version 1.06 has been tested on the following platforms:
.IP "Sun 3/60"
.IP "Sun SPARC I, II"
.IP "Encore Multimax running Umax 4.3"
.IP "Sun sparcstation IPX"
.IP "NeXTstation"
.IP "Decstation 3100 running Ultrix 4.2"
.IP "DEC 5000"
.IP "Linux on 386PC"
.IP "Microsoft C 6.0 on PC OS/2, DOS"
.IP "CSET/2 C compiler on PC OS/2"
.IP "IBM RS6000"
.IP "MIPS r2000"
.NH
Incompatibilities
.PP
Due to the bug fixes in 1.06, \*Qnew\*U ambiguities may appear in your
grammar. These were always there\(emANTLR simply did not find them.
The analysis is more correct.
.PP
Calls to \f(CWzzTRACE\fP are no longer generated by the \f(CW-gd\fP
option. Now, \f(CWzzTRACEIN\fP, \f(CWzzTRACEOUT\fP are called at the
beginning and end of functions, respectively.
.PP
The way in which PCCTS translates actions has been changed; before
they were parsed with a C function, now the \f(CW#lexclass\fP facility
is being used. Some differences in translation may be discovered;
e.g. a character may need to be escaped with \f(CW\e\fP whereas before
the simple character was sufficient.
.PP
The user should no longer set the next token of lookahead or the text
of the next token in the lexical analyzer using \f(CWLA(1)\fP and
\f(CWLATEXT(1)\fP. This is incompatible with the \f(CW-gk\fP option;
hence, \f(CWNLA\fP and \f(CWNLATEXT\fP should be used instead where
the \f(CWN\fP means \*Qnext\*U.
.PP
The \f(CW-ga\fP option does not generate anything different, as the code
generator now dumps both K&R and ANSI with \f(CW#ifdef\fP's around the
ANSI code.
.PP
Previously, no prototype was given when \f(CW-ga\fP was off. Now,
prototypes are always generated (with the appropriate
\f(CW#ifdef\fP's). These prototypes can conflict with the outside
environment if the rule names are things like \f(CWif\fP and
\f(CWstat\fP (which is a system call). Use the \f(CW-gp\ \fIprefix\fR
option to prefix all functions corresponding to rules with a string
of your choice.
.NH
Future Work
.PP
Predicates are still under development. We expect an enhanced version
of PCCTS that computes context and hoists more aggressively.
.PP
Often a grammar construct cannot be left factored to remove an
ambiguity. This typically arises in the situation that the common
prefix can be arbitrarily long. Fortunately, input is typically
finite and one could scan past these constructs given enough
lookahead. This is not the same thing as backtracking as the parser
never backs up; it simply looks ahead really far to make a decision.
This can easily be handled with a predicate of the form:
.cB
<<is_this_found_ahead(\fIcontext\fP)>>?
.cE
.LP
which would look ahead in the lookahead buffer to see if the
\fIcontext\fP occurred within some finite number of tokens. This
concept is very similar to LR-Regular (LRR) parsers for those familiar
with parsing theory. Note that this is a very fast, cheap way to get
something resembling the power of backtracking.
.PP
Attribute names are expected to be enhanced. For example, instead of
$i, $\fItoken_name\fP could be used:
.cB
a : WORD TYPE <<printf("%s %s\en", $WORD, $TYPE);>>
;
.cE
.PP
We expect to have a graphical user interface to PCCTS sometime in the
future which allows entry of grammars using syntax diagram notation.
The interface is expected to run under X Windows.
.PP
We anticipate a version that supports object-oriented programming and
generates C++ instead of ANSI C. For the moment, PCCTS is compatible
with C++, but future versions will support C++ directly.
.PP
Future versions, both C and C++, will be able to refer to PCCTS
symbols by \f(CWpccts.\fIsymbol\fR instead of \f(CWzz\fIsymbol\fR.
E.g. \f(CWzzskip()\fP will become \f(CWpccts.skip()\fP.
.PP
DLG will soon use lookahead of its own to allow the recognition of
more complicated expressions; specifically, those which have
left substrings in common with other regular expressions.
.PP
We expect future versions of PCCTS to dump grammar analysis and parser
construction statistics such as how many rules required 1 token of
lookahead, how many ambiguities were found, etc.